Developing web crawlers for vertical search engines: a survey of the current research
Abstract
Vertical search engines allow users to query for information within a subset of documents relevant to a pre-determined topic (Chakrabarti, 1999). One challenging aspect of deploying a vertical search engine is building a Web crawler that distinguishes relevant documents from non-relevant ones. In this research, we describe and analyze various methods for crawling relevant documents for vertical search engines, and we examine ways to apply these methods to building a local search engine.

In a typical crawl cycle for a vertical search engine, the crawler takes a URL from the URL frontier, downloads the content at that URL, and determines the document's relevance to the pre-defined topic. If the document is deemed relevant, it is indexed and its links are added to the URL frontier. This process raises two questions: how do we judge a document's relevance, and how should we prioritize URLs in the frontier so as to reach the best documents first?

To determine the relevance of a document, we may maintain a set of pre-determined keywords that we attempt to match against a crawled document's content and metadata. Another possibility is relevance feedback, a mechanism in which we train the crawler to spot relevant documents by feeding it training data. To prioritize links within the URL frontier, we can use a breadth-first crawler, which simply indexes pages one level at a time; bridges, pages that are not themselves indexed but are used to gather more links; reinforcement learning, in which the crawler is rewarded for reaching relevant pages; or decision trees, in which the priority given to a link depends on the quality of the parent page.

Computer Science Department – Western Washington University
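The crawl cycle described above can be sketched in a few dozen lines. The following is a minimal illustration, not any specific system from the surveyed papers: the keyword set, the relevance threshold, and the rule that a child link inherits its parent page's score as its frontier priority are all illustrative assumptions (the last one is a simple stand-in for the parent-quality idea behind the decision-tree approach).

```python
import heapq
import itertools
import re

# Hypothetical topic keywords standing in for the "pre-determined keywords"
# mentioned in the abstract.
KEYWORDS = {"housing", "campus", "tuition"}

def relevance(text, keywords=KEYWORDS):
    """Score a document as the fraction of topic keywords it contains."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(keywords & words) / len(keywords)

class Frontier:
    """URL frontier ordered by priority (highest first), with de-duplication."""
    def __init__(self):
        self._heap = []
        self._seen = set()
        self._counter = itertools.count()  # tie-breaker: FIFO among equal priorities

    def push(self, url, priority):
        if url not in self._seen:
            self._seen.add(url)
            heapq.heappush(self._heap, (-priority, next(self._counter), url))

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __bool__(self):
        return bool(self._heap)

def crawl(seeds, fetch, extract_links, threshold=0.3, max_pages=100):
    """One crawl cycle per iteration: pop a URL from the frontier, download
    its content, score its relevance, and (if relevant) index it and add
    its out-links to the frontier."""
    frontier = Frontier()
    for url in seeds:
        frontier.push(url, 1.0)
    index = {}
    while frontier and len(index) < max_pages:
        url = frontier.pop()
        text = fetch(url)
        score = relevance(text)
        if score >= threshold:
            index[url] = score
            for link in extract_links(text):
                # Illustrative priority rule: a link inherits the relevance
                # score of the page it was found on.
                frontier.push(link, score)
    return index
```

With `fetch` and `extract_links` supplied by the caller (e.g. an HTTP client and an HTML link extractor), the loop indexes only pages above the relevance threshold, so an off-topic page's links are never enqueued.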
Similar resources
A New Approach for Building a Scalable and Adaptive Vertical Search Engine
Search engines are the most important search tools for finding useful and recent information on the Web today. They rely on crawlers that continually crawl the Web for new pages. Meanwhile, focused crawlers have become an attractive area for research in recent years. They suggest a better solution for general-purpose search engine limitations and lead to a new generation of search engines calle...
Web Crawler: Extracting the Web Data
Internet usage has increased a lot in recent times. Users can find their resources by using different hypertext links. This usage of Internet has led to the invention of web crawlers. Web crawlers are full text search engines which assist users in navigating the web. These web crawlers can also be used in further research activities. For example, the crawled data can be used to find missing links, ...
A New Approach Towards Vertical Search Engines - Intelligent Focused Crawling and Multilingual Semantic Techniques
Search engines typically consist of a crawler which traverses the web retrieving documents and a search frontend which provides the user interface to the acquired information. Focused crawlers refine the crawler by intelligently directing it to predefined topic areas. The evolution of search engines today is expedited by supplying more search capabilities such as a search for metadata as well a...
Analysis of the Temporal Behaviour of Search Engine Crawlers at Web Sites
Web log mining is the extraction of web logs to analyze user behaviour at web sites. In addition to user information, web logs provide immense information about search engine traffic and behaviour. Search engine crawlers are highly automated programs that periodically visit the web site to collect information. The behaviour of search engines could be used in analyzing server load, quality of se...
Using the Web Efficiently: Mobile Crawlers
Search engines have become important tools for Web navigation. In order to provide powerful search facilities, search engines maintain comprehensive indices of documents available on the Web. The creation and maintenance of Web indices is done by Web crawlers, which recursively traverse and download Web pages on behalf of search engines. Analysis of the collected information is performed after ...